
    Space-Time Interpolation Techniques (Raum-Zeit Interpolationstechniken)

    Photo-realistic modeling and animation of complex 3D scenes requires considerable effort and skill from artists, even with modern acquisition techniques. This is especially true if the rendering must additionally be performed in real-time. In this thesis we pursue an alternative direction in computer graphics and generate photo-realistic results from recorded video sequences of one or multiple cameras. We propose several methods to handle scenes showing natural phenomena as well as multi-view footage of general complex 3D scenes. In contrast to other approaches, we make use of relaxed geometric constraints and focus especially on the image properties that matter for creating perceptually plausible in-between images. The results are novel photo-realistic video sequences, rendered in real-time, that allow for interactive manipulation and for interactively exploring novel viewpoints and time instants.

    Space-Time Interpolation through Image-Based Morphing (Ort-Zeit Interpolation durch Bildbasiertes Morphen)

    Rendering convincing transitions between individual pictures is the main challenge in image-based rendering and keyframe animation, as well as the prerequisite for many stunning visual effects. We present a perception-based method for automatic image interpolation, achieving psycho-visually plausible transitions between real-world images in real-time. Based on recent discoveries in perception research, we propose an optical flow-based warping refinement method and an adaptive non-linear image blending scheme to guarantee perceptual plausibility of the interpolated in-between images. Conventional, uncalibrated photographs suffice to convincingly interpolate across space, time, and between different objects, without the need to recover 3D scene geometry, actual motion, or camera calibration. Using off-the-shelf digital cameras, we demonstrate how to continuously navigate the viewpoint between camera positions and shutter release times, how to animate still pictures, create smooth camera motion paths, and how to convincingly morph between depictions of different objects.
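    The core warp-and-blend idea behind such interpolation can be sketched in a few lines. This is a minimal stand-in, not the paper's method: it assumes a precomputed dense flow field per image pair, uses nearest-neighbor sampling instead of the perceptual warp refinement, and a plain linear blend instead of the adaptive non-linear blending scheme.

```python
import numpy as np

def warp(image, flow, t):
    """Warp `image` by a fraction t of a dense flow field.

    image: (H, W) grayscale frame; flow: (H, W, 2) per-pixel (dy, dx)
    displacements toward the other frame. Nearest-neighbor sampling
    keeps the sketch short.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.round(ys + t * flow[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.round(xs + t * flow[..., 1]).astype(int), 0, w - 1)
    return image[sy, sx]

def interpolate(img0, img1, flow01, flow10, t):
    """Warp both endpoint frames toward time t in [0, 1], then
    cross-blend them linearly (the paper blends non-linearly)."""
    a = warp(img0, flow01, t)
    b = warp(img1, flow10, 1.0 - t)
    return (1.0 - t) * a + t * b
```

    At t = 0 or t = 1 the original frames are reproduced exactly; in between, both frames are pulled toward the common intermediate configuration before blending, which is what suppresses the ghosting of a naive cross-dissolve.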

    Training of PDCA cycle using a catapult in a virtual learning environment

    The sustainable teaching of quality methods in the sense of Lean Management and Six Sigma through assistance systems, such as virtual reality goggles, represents a new and growing aspect of continuing education programs. The development and use of virtual learning environments offers the chance to deepen theoretical prior knowledge through interactive learning possibilities. In this way, existing learning concepts are supplemented with virtual teaching content. Complex or difficult-to-present learning settings can be mapped virtually without high material consumption or costs. This paper presents the state of the art with respect to teaching quality methods with VR and integrates it into the proposed Assisted Reality Implementation Model. Subsequently, the requirements for a virtual learning environment based on a real business game are determined. The approach and implementation are explained using the example of the PDCA quality method. First results of the exploratory tests from the questionnaires are presented. Based on these, improvements are derived and the next steps are defined.

    Spacetime Tetrahedra: Image-Based Viewpoint Navigation through Space and Time (Bildbasierte Blickpunktnavigation durch Raum und Zeit)

    We present a purely image-based rendering system to viewpoint-navigate through space and time of arbitrary dynamic scenes. Unlike previous methods, our approach does not rely on synchronized and calibrated multi-video footage as input. Instead of estimating scene depth or reconstructing 3D geometry, our approach is based on dense image correspondences, treating view interpolation equally in space and time. In a nutshell, we tetrahedrally partition the volume spanned by camera directions and time, determine the warp field along each tetrahedral edge, and warp-blend-interpolate any viewpoint inside a tetrahedron from the four video frames representing its vertices. Besides fast and easy acquisition that makes outdoor recordings feasible, our space-time symmetric approach allows for smooth interpolation of view perspective and time, i.e., for simultaneous free-viewpoint and slow-motion rendering.
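    The interpolation weights inside one tetrahedron are simply the barycentric coordinates of the query point in the (view, time) parameter volume. The sketch below shows that weighting step only, under the simplifying assumption that the four vertex frames are blended directly; in the abstract's method each frame is first warped along the tetrahedral edges before blending.

```python
import numpy as np

def barycentric_weights(p, verts):
    """Barycentric coordinates of point p inside a tetrahedron.

    verts: (4, 3) vertex positions in the (view_x, view_y, time)
    parameter volume; p: (3,) query point. Solves the 4x4 linear
    system: the weights combine the vertices affinely and sum to 1.
    """
    A = np.vstack([verts.T, np.ones(4)])  # 4x4 system matrix
    b = np.append(p, 1.0)
    return np.linalg.solve(A, b)

def blend_frames(p, verts, frames):
    """Blend the four vertex video frames (4, H, W) with the
    barycentric weights of the query point p."""
    w = barycentric_weights(p, verts)
    return np.tensordot(w, frames, axes=1)
```

    At a tetrahedron vertex the weight vector is one-hot, so the corresponding recorded frame is reproduced exactly; moving the query point through the tetrahedron interpolates viewpoint and time simultaneously.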

    Perception-Based Image Interpolation (Wahrnehmungsbasierte Bildinterpolation)

    We present a perception-based method for image interpolation, aiming for perceptually convincing transitions between real-world images. Without 3D geometry or scene motion, perception-based image interpolation enables smooth viewpoint navigation across space and time. We show how global visual effects can be created from a collection of unsynchronized, uncalibrated images. A user study confirms the perceptual quality of the proposed image interpolation approach.

    Keyframe Animation from Video

    This paper proposes a method for analyzing and synthesizing video sequences, specifically suited for image sequences of natural phenomena. We combine a low-dimensional representation of arbitrary image sequences with an image morphing technique to create realistic in-between images. The visualization based on the Isomap algorithm allows users to easily select parts of the video that have periodic character. From these segments, new sequences can be synthesized in real-time. To smooth transition artifacts between the reordered subsequences, a Monge-Kantorovich-based image morphing method is applied to interpolate in-between images. Our approach is useful, e.g., for video keyframe animation, automatic looping, or creating slow-motion sequences.
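    The loop-selection step can be sketched as follows. This is a simplified stand-in: it assumes the frames have already been embedded in a low-dimensional space (the paper uses Isomap for this), picks the closest frame pair as loop boundaries, and concatenates segments directly where the paper would smooth the seams with Monge-Kantorovich morphing.

```python
import numpy as np

def best_loop(embedding, min_len=2):
    """Find the frame pair (i, j) whose low-dimensional embedding
    points are closest, so the segment i..j-1 can be replayed as a
    near-seamless loop. embedding: list of per-frame coordinate arrays."""
    n = len(embedding)
    best, best_d = None, np.inf
    for i in range(n):
        for j in range(i + min_len, n):
            d = np.linalg.norm(embedding[i] - embedding[j])
            if d < best_d:
                best, best_d = (i, j), d
    return best

def synthesize(frames, loop, repeats):
    """Re-order the video by repeating the looping segment
    `repeats` times (transition frames would be morphed in the
    full method; here segments are simply concatenated)."""
    i, j = loop
    return frames[:i] + frames[i:j] * repeats + frames[j:]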

    Periodic Temporal Super Resolution Based on Phase Registration and Manifold Reconstruction

    Learning Flames

    In this work we propose a novel approach for realistic fire animation and manipulation. We apply a statistical learning method to an image sequence of a real-world flame to jointly capture flame motion and appearance characteristics. A low-dimensional generic flame model is then robustly matched to the video images. The model parameter values are used as input to an Expectation-Maximization algorithm that learns an autoregressive process for the flame dynamics. The generic flame model and the trained motion model enable us to synthesize new, unique flame sequences of arbitrary length in real-time.
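    The dynamics-learning step can be illustrated with a first-order autoregressive model on the flame parameter vectors. This sketch fits x_{t+1} ≈ A x_t + b by plain least squares (a simplification of the EM-trained process in the abstract, which also models noise) and rolls the fitted process forward to generate a new parameter sequence of arbitrary length.

```python
import numpy as np

def fit_ar1(params):
    """Least-squares fit of x_{t+1} = A x_t + b on a (T, d) array of
    flame-model parameter vectors over time. Returns (A, b)."""
    X = np.hstack([params[:-1], np.ones((len(params) - 1, 1))])
    coef, *_ = np.linalg.lstsq(X, params[1:], rcond=None)
    A, b = coef[:-1].T, coef[-1]
    return A, b

def synthesize_params(A, b, x0, steps):
    """Roll the fitted process forward from x0 to generate a new,
    arbitrarily long parameter sequence driving the flame model."""
    out = [np.asarray(x0, float)]
    for _ in range(steps):
        out.append(A @ out[-1] + b)
    return np.stack(out)
```

    Each synthesized parameter vector is then fed back into the generic flame model to render one frame, so sequence length is decoupled from the length of the training video.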